
    A neural marker for social bias toward in-group accents

    Accents provide information about the speaker's geographical, socio-economic, and ethnic background. Research in applied psychology and sociolinguistics suggests that we generally prefer our own accent to other varieties of our native language and attribute more positive traits to it. Despite the widespread influence of accents on social interactions and on educational and work settings, the neural underpinnings of this social bias toward our own accent, and what may drive it, are unexplored. We measured brain activity while participants from two different geographical backgrounds listened passively to three English accent types embedded in an adaptation design. Cerebral activity in several regions, including the bilateral amygdalae, revealed a significant interaction between the participants' own accent and the accent they listened to: whereas repetition of the participants' own accent elicited an enhanced neural response, repetition of the other group's accent resulted in the reduced responses classically associated with adaptation. Our findings suggest that increased social relevance of, or greater emotional sensitivity to, in-group accents may underlie the own-accent bias. Our results provide a neural marker for the bias associated with accents and show, for the first time, that the neural response to speech is partly shaped by the geographical background of the listener.

    Norm-based coding of voice identity in human auditory cortex

    Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice—a key requirement for social interactions. The human brain contains temporal voice areas (TVA) [1] involved in an acoustic-based representation of voice identity [2-6], but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism [4, 7-11]. Here, we show using fMRI that voice identity is coded in the TVA as a function of acoustical distance to two internal voice prototypes (one male, one female)—approximated here by averaging a large number of same-gender voices using morphing [12]. Voices more distant from their prototype are perceived as more distinctive and elicit greater neuronal activity in voice-sensitive cortex than closer voices—a phenomenon not merely explained by neuronal adaptation [13, 14]. Moreover, explicit manipulations of distance-to-mean by morphing voices toward (or away from) their prototype elicit reduced (or enhanced) neuronal activity. These results indicate that voice-sensitive cortex integrates relevant acoustical features into a complex representation referenced to idealized male and female voice prototypes. More generally, they shed light on remarkable similarities in the cerebral representations of facial and vocal identity.

    Functional mapping of the human auditory cortex

    Objective: To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. Background: The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context, by visibility of the speaker’s facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than in place of articulation. Words presented to her right ear are extinguished under dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. Methods: We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Results: Sounds activated the caudal sub-area of M.L.’s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Conclusions: Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.’s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

    Right temporal TMS impairs voice detection

    Functional magnetic resonance imaging (fMRI) research has revealed bilateral cortical regions along the upper banks of the superior temporal sulci (STS) that respond preferentially to voices compared to non-vocal, environmental sounds [1, 2]. This sensitivity is particularly pronounced in the right hemisphere. Voice perception models imply that these regions, referred to as the temporal voice areas (TVAs), could correspond to a first stage of voice-specific processing in auditory cortex [3, 4], after which different types of vocal information are processed in interacting but partially independent functional pathways. However, clear causal evidence for this claim is missing. Here we provide the first direct link between TVA activity and voice detection ability using repetitive transcranial magnetic stimulation (rTMS). Voice/non-voice discrimination ability was impaired when rTMS was targeted at the right TVA compared with a control site. In contrast, a lower-level loudness judgement task was not differentially affected by the site of stimulation. These results imply that neuronal computations in the right TVA are necessary for the distinction between human voices and other, non-vocal sounds.